5 research outputs found

    Learning to Flip Successive Cancellation Decoding of Polar Codes with LSTM Networks

    The key to successive cancellation (SC) flip decoding of polar codes is to accurately identify the first error bit. The optimal flipping strategy is considered difficult to derive because it lacks an analytical solution. As an alternative, we propose a deep-learning-aided SC flip algorithm. Specifically, before each SC decoding attempt, a long short-term memory (LSTM) network is used either to (i) locate the first error bit or (ii) undo a previous `wrong' flip. In each SC attempt, the sequence of log-likelihood ratios (LLRs) derived in the previous attempt is used to decide which action to take. Accordingly, a two-stage training method for the LSTM network is proposed: learn to locate first error bits in the first stage, and then learn to undo `wrong' flips in the second stage. Simulation results show that the proposed approach identifies error bits more accurately and achieves better performance than state-of-the-art SC flip algorithms. Comment: 5 pages, 7 figures
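    A minimal sketch of how an LSTM could score bit positions from an LLR sequence, in the spirit of the first training stage described above (the PyTorch framing, layer sizes, and all names here are illustrative assumptions, not the authors' implementation):

```python
# Minimal sketch (assumption: PyTorch; sizes and names are illustrative only).
import torch
import torch.nn as nn

class FlipPositionLSTM(nn.Module):
    """Scores each bit position as the likely first error, given the LLR sequence."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, llrs):                    # llrs: (batch, code_length)
        h, _ = self.lstm(llrs.unsqueeze(-1))    # (batch, code_length, hidden)
        return self.score(h).squeeze(-1)        # (batch, code_length) position scores

# Stage-1-style training step (toy data): cross-entropy against the known
# first-error index collected from simulated SC decoding runs.
model = FlipPositionLSTM()
llrs = torch.randn(8, 128)                      # toy batch: 8 LLR sequences of length 128
targets = torch.randint(0, 128, (8,))           # toy labels: first-error positions
loss = nn.CrossEntropyLoss()(model(llrs), targets)
```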

    Predicting the Mumble of Wireless Channel with Sequence-to-Sequence Models

    Accurate prediction of the fading channel in the upcoming transmission frame is essential for adaptive transmission at the transmitter, and a receiver capable of channel prediction can also save some channel estimation computations. However, rapid channel variation and channel estimation error make reliable prediction hard to achieve. In this situation, an appropriate channel model should be selected that covers both the statistical model and the small-scale fading of the channel. This is reminiscent of natural languages, which also combine statistical word frequencies with specific sentences. Accordingly, in this paper we treat the wireless channel model as a language model and the time-varying channel as speech in this language, while the noisy estimated channel observed in practice can be compared to mumbling. Furthermore, to exploit as much of the information carried by a channel coefficient as possible, we discard the two conventional features of absolute value and phase and replace them with hundreds of features learned by our channel model; to do this, we use a vocabulary that maps each complex channel coefficient to an ID, which is in turn represented by a vector of real numbers. Recurrent neural networks are used for their good balance between memorization and generalization, and we further introduce sequence-to-sequence (seq2seq) models for time-series channel prediction, which translate the past channel into the future channel. The results show that realistic channel prediction with performance superior to channel estimation is attainable. Comment: 7 pages, 7 figures, updated figures 6 and 7, added reference
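    A minimal sketch of the "channel as language" pipeline described above: a toy quantizer maps a complex coefficient to a vocabulary ID, and a seq2seq model translates past channel tokens into future ones (PyTorch, the toy quantizer, and all sizes are assumptions, not the paper's actual tokenization or architecture):

```python
# Minimal sketch (assumptions: PyTorch; the quantizer and sizes are placeholders).
import torch
import torch.nn as nn

def coeff_to_id(h: complex, levels=32, clip=3.0):
    """Quantize one complex channel coefficient into a vocabulary ID (toy quantizer)."""
    q = lambda x: int((min(max(x, -clip), clip) + clip) / (2 * clip) * (levels - 1))
    return q(h.real) * levels + q(h.imag)       # vocabulary size = levels * levels

class Seq2SeqChannelPredictor(nn.Module):
    """Encodes past channel tokens and decodes future channel tokens."""
    def __init__(self, vocab=32 * 32, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)   # each channel ID becomes a real-valued vector
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, past_ids, future_ids):
        _, state = self.encoder(self.embed(past_ids))
        dec, _ = self.decoder(self.embed(future_ids), state)
        return self.out(dec)                    # logits over the channel vocabulary

# Toy usage: predict tokens for the next frame from tokens of past frames.
model = Seq2SeqChannelPredictor()
past = torch.randint(0, 1024, (4, 50))
future = torch.randint(0, 1024, (4, 10))
logits = model(past, future)                    # (4, 10, 1024)
```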

    Realistic Channel Models Pre-training

    In this paper, we propose a neural-network-based realistic channel model with both accuracy comparable to deterministic channel models and the uniformity of stochastic channel models. To facilitate this realistic channel modeling, a multi-domain channel embedding method combined with a self-attention mechanism is proposed to extract channel features from multiple domains simultaneously. This 'one model to fit them all' solution employs available wireless channel data as the only data set for self-supervised pre-training. With the permission of users, network operators or other organizations can use available user-specific data to fine-tune this pre-trained realistic channel model for channel-related downstream tasks. Moreover, we show that even without fine-tuning, the pre-trained realistic channel model is itself a useful tool owing to its understanding of the wireless channel. Comment: 6 pages, 5 figures
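    A minimal sketch of a multi-domain channel embedding feeding a self-attention encoder, along the lines the abstract describes (PyTorch; the domains, index granularity, and dimensions chosen here are assumptions, not the paper's architecture):

```python
# Minimal sketch (assumptions: PyTorch; domains and sizes are illustrative only).
import torch
import torch.nn as nn

class MultiDomainChannelEncoder(nn.Module):
    """Combines per-coefficient values with frequency- and antenna-domain embeddings,
    then extracts contextual channel features with self-attention."""
    def __init__(self, n_subcarriers=64, n_antennas=4, d_model=128):
        super().__init__()
        self.value_proj = nn.Linear(2, d_model)              # (real, imag) of each coefficient
        self.freq_emb = nn.Embedding(n_subcarriers, d_model) # frequency-domain position
        self.ant_emb = nn.Embedding(n_antennas, d_model)     # spatial-domain position
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, h_ri, freq_idx, ant_idx):
        # h_ri: (batch, seq, 2); freq_idx/ant_idx: (batch, seq) integer indices
        x = self.value_proj(h_ri) + self.freq_emb(freq_idx) + self.ant_emb(ant_idx)
        return self.encoder(x)                               # contextual channel features
```

    Self-supervised pre-training could then, for example, mask some coefficients and train the encoder to reconstruct them, using only raw channel data; that objective is an assumption here, not a detail taken from the paper.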

    Buffer-aware Wireless Scheduling based on Deep Reinforcement Learning

    In this paper, the downlink packet scheduling problem for cellular networks is modeled as a joint optimization of throughput, fairness, and packet drop rate. Two genie-aided heuristic search methods are employed to explore the solution space. A deep reinforcement learning (DRL) framework with the A2C algorithm is proposed for the optimization problem. Several methods are used within the framework to improve sampling and training efficiency and to adapt the algorithm to the specific scheduling problem. Numerical results show that DRL outperforms the baseline algorithm and achieves performance similar to the genie-aided methods without using future information. Comment: submitted to WCNC202
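    A minimal sketch of an A2C actor-critic for per-slot user scheduling, matching the framework described above only at a high level (PyTorch; the state features, network sizes, and update step shown are placeholders, not the paper's design):

```python
# Minimal sketch (assumptions: PyTorch; features, sizes, and reward are placeholders).
import torch
import torch.nn as nn

class SchedulerA2C(nn.Module):
    """Actor picks which user to serve in the next slot; critic estimates state value."""
    def __init__(self, n_users=10, feat_per_user=3, hidden=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_users * feat_per_user, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_users)   # logits over users to schedule
        self.critic = nn.Linear(hidden, 1)        # state-value estimate

    def forward(self, state):
        z = self.body(state)
        return self.actor(z), self.critic(z)

# Toy rollout step: sample an action; an A2C update would combine
# -advantage * log_prob(action) with a value regression loss.
model = SchedulerA2C()
state = torch.randn(1, 30)                        # e.g. buffer, channel quality, past rate per user
logits, value = model(state)
action = torch.distributions.Categorical(logits=logits).sample()
```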

    A Flip-Syndrome-List Polar Decoder Architecture for Ultra-Low-Latency Communications

    We consider the practical hardware implementation of polar decoders. To reduce the latency caused by the serial nature of successive cancellation (SC), existing optimizations improve parallelism via two approaches: multi-bit decision and reduced path splitting. In this paper, we combine the two procedures into one with an error-pattern-based architecture. It simultaneously generates a set of candidate paths for multiple bits using pre-stored patterns. For rate-1 (R1) or single parity-check (SPC) nodes, we prove that a small number of deterministic patterns is sufficient to guarantee performance preservation. For general nodes, low-weight error patterns are indexed by syndrome in a look-up table and retrieved in O(1) time. The proposed flip-syndrome-list (FSL) decoder fully parallelizes all constituent code blocks without sacrificing performance and is thus suitable for ultra-low-latency applications. Meanwhile, two code construction optimizations are presented to further reduce complexity and improve performance, respectively. Comment: 10 pages, submitted to IEEE Access (Special Issue on Advances in Channel Coding for 5G and Beyond)
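    A minimal sketch of the syndrome-indexed look-up of low-weight error patterns for a constituent block (a toy parity-check matrix and weight limit are assumed here; the actual FSL pattern sets and the accompanying proofs are in the paper):

```python
# Minimal sketch (assumptions: toy parity-check matrix and weight limit).
import itertools
import numpy as np

def build_pattern_table(H, max_weight=2):
    """Map each syndrome (as a tuple) to its low-weight candidate error patterns."""
    n = H.shape[1]
    table = {}
    for w in range(max_weight + 1):
        for positions in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(positions)] = 1
            syndrome = tuple(H @ e % 2)
            table.setdefault(syndrome, []).append(e)
    return table

# O(1) retrieval during decoding: look up the syndrome of the hard decisions.
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])                    # toy parity-check matrix
table = build_pattern_table(H)
hard_decisions = np.array([1, 0, 1, 0])
candidates = table[tuple(H @ hard_decisions % 2)]
```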